12 research outputs found

    Fast Algorithms for Sampled Multiband Signals

    Over the past several years, computational power has grown tremendously. This has led to two trends in signal processing. First, signal processing problems are now posed and solved using linear algebra, instead of traditional methods such as filtering and Fourier transforms. Second, problems involve increasingly large amounts of data. Applying tools from linear algebra to large-scale problems requires the problem to have some type of low-dimensional structure which can be exploited to perform the computations efficiently. One common type of signal with a low-dimensional structure is a multiband signal, which has a sparsely supported Fourier transform. Transferring this low-dimensional structure from the continuous-time signal to the discrete-time samples requires care. Naive approaches use the FFT, which suffers from spectral leakage. A more suitable way to exploit this low-dimensional structure is the Slepian basis, whose vectors are useful in many problems due to their time-frequency localization properties. However, prior to this research, no fast algorithms for working with the Slepian basis had been developed. As such, practitioners often passed over the Slepian basis in favor of more computationally efficient tools, such as the FFT, even in problems for which the Slepian basis is the more appropriate tool. In this thesis, we first study the mathematical properties of the Slepian basis, as well as the closely related discrete prolate spheroidal sequences and prolate spheroidal wave functions. We then use these properties to develop fast algorithms for working with the Slepian basis, a fast algorithm for reconstructing a multiband signal from nonuniform measurements, and a fast algorithm for reconstructing a multiband signal from compressed measurements. The runtime and memory requirements of all of our fast algorithms scale roughly linearly with the number of samples of the signal.
    Ph.D. thesis
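
    To make the role of the Slepian basis concrete, the sketch below builds the discrete prolate spheroidal sequence (DPSS) vectors with SciPy and projects noisy bandlimited samples onto them. This is the direct O(NK) computation that the thesis's fast algorithms are designed to accelerate, not the fast algorithms themselves, and every parameter value here is an illustrative assumption.

    ```python
    # Minimal sketch: project sampled bandlimited data onto the Slepian basis.
    # This is the naive O(N*K) approach that fast Slepian algorithms speed up.
    import numpy as np
    from scipy.signal.windows import dpss

    N = 1024                   # number of samples (assumed)
    W = 0.02                   # half-bandwidth of the band [-W, W], cycles/sample
    K = int(2 * N * W) + 8     # ~2NW Slepian vectors capture a band of width 2W

    # Rows of S are the first K discrete prolate spheroidal sequences, i.e. the
    # Slepian basis vectors for time length N and half-bandwidth W (orthonormal).
    S = dpss(N, N * W, Kmax=K)                     # shape (K, N)

    # Synthetic baseband signal: a few tones inside [-W, W] plus noise.
    rng = np.random.default_rng(0)
    n = np.arange(N)
    x = sum(np.cos(2 * np.pi * f * n) for f in rng.uniform(-W, W, 3))
    x = x + 0.1 * rng.standard_normal(N)

    # Orthogonal projection onto the Slepian subspace; unlike a truncated FFT
    # representation, this does not suffer from spectral leakage.
    coeffs = S @ x                                 # Slepian-domain coefficients
    x_hat = S.T @ coeffs                           # back to the time domain
    print("relative residual:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
    ```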

    Neural Network Approximation of Continuous Functions in High Dimensions with Applications to Inverse Problems

    The remarkable successes of neural networks in a huge variety of inverse problems have fueled their adoption in disciplines ranging from medical imaging to seismic analysis over the past decade. However, the high dimensionality of such inverse problems has simultaneously left current theory, which predicts that networks should scale exponentially in the dimension of the problem, unable to explain why the seemingly small networks used in these settings work as well as they do in practice. To reduce this gap between theory and practice, we provide a general method for bounding the complexity required for a neural network to approximate a H\"older (or uniformly) continuous function defined on a high-dimensional set with a low-complexity structure. The approach is based on the observation that the existence of a Johnson-Lindenstrauss embedding $A \in \mathbb{R}^{d \times D}$ of a given high-dimensional set $S \subset \mathbb{R}^D$ into a low-dimensional cube $[-M, M]^d$ implies that for any H\"older (or uniformly) continuous function $f : S \to \mathbb{R}^p$, there exists a H\"older (or uniformly) continuous function $g : [-M, M]^d \to \mathbb{R}^p$ such that $g(Ax) = f(x)$ for all $x \in S$. Hence, if one has a neural network which approximates $g : [-M, M]^d \to \mathbb{R}^p$, then a layer that implements the JL embedding $A$ can be added to obtain a neural network that approximates $f : S \to \mathbb{R}^p$. By pairing JL embedding results with results on the approximation of H\"older (or uniformly) continuous functions by neural networks, one obtains bounds on the complexity required for a neural network to approximate H\"older (or uniformly) continuous functions on high-dimensional sets. The end result is a general theoretical framework which can then be used to better explain the observed empirical successes of smaller networks in a wider variety of inverse problems than current theory allows.
    Comment: 26 pages, 1 figure
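
    As a concrete rendering of this construction, here is a short PyTorch sketch (the framework choice and all dimensions are assumptions for illustration) of a network of the form g(Ax): a fixed, non-trainable random Gaussian matrix plays the role of the JL embedding A, and a small ReLU MLP stands in for the network approximating g on the low-dimensional cube.

    ```python
    # Sketch of the construction: approximate f on a high-dimensional set S
    # by g(Ax), where A is a fixed JL embedding and g is a small network.
    import torch
    import torch.nn as nn

    D, d, p = 10_000, 32, 4    # ambient dim, embedded dim, output dim (assumed)

    class JLNet(nn.Module):
        def __init__(self):
            super().__init__()
            # Fixed Gaussian JL map A in R^{d x D}; stored as a buffer so it
            # is part of the model but never updated by the optimizer.
            self.register_buffer("A", torch.randn(d, D) / d ** 0.5)
            # Small MLP standing in for g : [-M, M]^d -> R^p, whose size is
            # what the complexity bounds control (architecture assumed).
            self.g = nn.Sequential(
                nn.Linear(d, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, p),
            )

        def forward(self, x):            # x: (batch, D), points of S
            return self.g(x @ self.A.T)  # computes g(Ax)

    net = JLNet()
    x = torch.randn(8, D)                # stand-in batch; real inputs lie on S
    print(net(x).shape)                  # torch.Size([8, 4])
    ```

    Only g is trained; prepending the fixed embedding layer is what lets the network's size scale with the embedded dimension d rather than with the ambient dimension D.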

    Tensor Sandwich: Tensor Completion for Low CP-Rank Tensors via Adaptive Random Sampling

    We propose an adaptive and provably accurate tensor completion approach based on combining matrix completion techniques (see, e.g., arXiv:0805.4471, arXiv:1407.3619, arXiv:1306.2979) for a small number of slices with a modified noise-robust version of Jennrich's algorithm. In the simplest case, this leads to a sampling strategy that more densely samples two outer slices (the bread), and then more sparsely samples additional inner slices (the bbq-braised tofu) for the final completion. Under mild assumptions on the factor matrices, the proposed algorithm completes an $n \times n \times n$ tensor of CP-rank $r$ with high probability while using at most $\mathcal{O}(nr \log^2 r)$ adaptively chosen samples. Empirical experiments further verify that the proposed approach works well in practice, including as a low-rank approximation method in the presence of additive noise.
    Comment: 6 pages, 5 figures. Sampling Theory and Applications Conference 2023
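
    For background, the noise-free core of Jennrich's algorithm, which the paper modifies to be noise robust and pairs with matrix completion, fits in a few lines of NumPy: two random linear combinations of the frontal slices give a matrix whose dominant eigenvectors recover a CP factor. This is a textbook sketch under ideal assumptions (exact rank, no noise, generic factors), not the paper's completion pipeline.

    ```python
    # Classical Jennrich's algorithm (noise-free sketch): recover a CP factor
    # of a rank-r tensor from two random combinations of its frontal slices.
    import numpy as np

    rng = np.random.default_rng(1)
    n, r = 30, 5                          # sizes assumed for illustration

    # Random CP-rank-r tensor T = sum_k A[:,k] (x) B[:,k] (x) C[:,k].
    A, B, C = (rng.standard_normal((n, r)) for _ in range(3))
    T = np.einsum("ik,jk,lk->ijl", A, B, C)

    # Two random linear combinations of the frontal slices.
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    Tx = np.einsum("ijl,l->ij", T, x)     # = A diag(C^T x) B^T
    Ty = np.einsum("ijl,l->ij", T, y)     # = A diag(C^T y) B^T

    # M = Tx pinv(Ty) satisfies M A = A diag((C^T x) / (C^T y)), so the r
    # dominant eigenvectors of M are the columns of A, up to scale and order.
    M = Tx @ np.linalg.pinv(Ty)
    eigvals, eigvecs = np.linalg.eig(M)
    keep = np.argsort(-np.abs(eigvals))[:r]
    A_hat = np.real(eigvecs[:, keep])

    # Check: A_hat spans the same column space as the true factor A.
    proj = A_hat @ np.linalg.lstsq(A_hat, A, rcond=None)[0]
    print("column-space error:", np.linalg.norm(A - proj) / np.linalg.norm(A))
    ```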

    Broadband Beamforming via Linear Embedding

    In modern applications, multi-sensor arrays are subject to an ever-present demand to accommodate signals with higher bandwidths. Standard methods for broadband beamforming, namely digital beamforming and true-time delay, are difficult and expensive to implement at scale. In this work, we explore an alternative method of broadband beamforming that uses a set of linear measurements and a robust low-dimensional signal subspace model. The linear measurements, taken directly from the sensors, act as a dimensionality reduction and limit the array readout. From these embedded samples, we show how the original samples can be recovered to within a provably small residual error using a Slepian subspace model. Previous work on multi-sensor array subspace models has largely analyzed performance from a qualitative or asymptotic perspective. In contrast, we give quantitative estimates of how well different dimensionality reduction strategies preserve the array gain. We also show how spatial and temporal correlations can be used to relax the standard Nyquist sampling criterion, how recovery can be achieved through fast algorithms, and how "hardware friendly" linear measurements can be designed.
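
    The recovery step can be illustrated with a small NumPy sketch: a snapshot that lies in a K-dimensional Slepian subspace is compressed by a random linear readout and recovered by least squares on that subspace. The Gaussian measurement matrix and all sizes here are assumptions for illustration; the paper designs structured, "hardware friendly" measurements and fast recovery algorithms instead.

    ```python
    # Sketch: recover array samples from a low-dimensional linear readout
    # using a Slepian subspace model (sizes and Gaussian readout assumed).
    import numpy as np
    from scipy.signal.windows import dpss

    N, W = 512, 0.05                  # samples per window, half-bandwidth
    K = int(2 * N * W) + 6            # Slepian subspace dimension (~2NW)
    m = 2 * K                         # number of linear measurements, m << N

    S = dpss(N, N * W, Kmax=K).T      # (N, K) orthonormal Slepian basis

    rng = np.random.default_rng(2)
    x = S @ rng.standard_normal(K)    # snapshot lying in the model subspace

    Phi = rng.standard_normal((m, N)) / np.sqrt(m)   # dimensionality reduction
    y = Phi @ x                                      # embedded samples read out

    # Fit the K subspace coefficients to the m measurements, then re-expand;
    # with m >= K the subspace component is (generically) recovered exactly.
    alpha, *_ = np.linalg.lstsq(Phi @ S, y, rcond=None)
    x_hat = S @ alpha
    print("relative recovery error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
    ```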

    ROAST: Rapid Orthogonal Approximate Slepian Transform
